
    Parameter Estimation of Type-I and Type-II Hybrid Censored Data from the Log-Logistic Distribution

    In experiments on product lifetime and reliability testing, there are many practical situations in which researchers terminate the experiment and report the results before all items on test have failed, because of time or cost considerations. The most common and popular censoring schemes are type-I and type-II censoring. In the type-I censoring scheme, the termination time is pre-fixed but the number of observed failures is a random variable; if the mean lifetime of the experimental units is somewhat larger than the pre-fixed termination time, far fewer failures are observed, which is a significant disadvantage for the efficiency of inferential procedures. In the type-II censoring scheme, by contrast, the number of observed failures is pre-fixed but the experiment time is a random variable; at least the pre-specified number of failures is obtained, but the potentially long termination time is clearly a disadvantage from the experimenter's point of view. To overcome some of these drawbacks, the hybrid censoring scheme, a mixture of the conventional type-I and type-II censoring schemes, has received much attention in recent years. In this paper, we consider the analysis of type-I and type-II hybrid censored data in which the lifetimes of items follow the two-parameter log-logistic distribution. We present the maximum likelihood estimators of the unknown parameters and asymptotic confidence intervals, and a simulation study is conducted to evaluate the proposed methods.
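    As a rough illustration of the estimation problem described above (not the paper's own code), the sketch below fits the two-parameter log-logistic distribution to a hybrid censored sample by numerically maximizing the censored likelihood; the parameterization (alpha = scale, beta = shape), the optimizer, and the sample data are assumptions.

```python
# Sketch under the assumptions stated above: units that have not failed by t_stop
# (the hybrid termination time) are treated as right-censored at t_stop.
import numpy as np
from scipy.optimize import minimize

def loglogistic_logpdf(t, alpha, beta):
    # log f(t) for f(t) = (beta/alpha) * (t/alpha)**(beta-1) / (1 + (t/alpha)**beta)**2
    z = (t / alpha) ** beta
    return np.log(beta / alpha) + (beta - 1) * np.log(t / alpha) - 2 * np.log1p(z)

def loglogistic_logsf(t, alpha, beta):
    # log survival function, S(t) = 1 / (1 + (t/alpha)**beta)
    return -np.log1p((t / alpha) ** beta)

def neg_loglik(params, failures, t_stop, n):
    alpha, beta = params
    if alpha <= 0 or beta <= 0:
        return np.inf
    d = len(failures)                                        # failures observed before t_stop
    ll = loglogistic_logpdf(np.asarray(failures), alpha, beta).sum()
    ll += (n - d) * loglogistic_logsf(t_stop, alpha, beta)   # n - d right-censored units
    return -ll

def fit_hybrid_censored(failures, t_stop, n):
    """Maximum likelihood estimates (alpha_hat, beta_hat) for a hybrid censored sample."""
    start = np.array([np.median(failures), 1.0])
    res = minimize(neg_loglik, start, args=(failures, t_stop, n), method="Nelder-Mead")
    return res.x

# Illustrative data: n = 30 units on test, terminated at t_stop = 4.5 with 10 observed failures.
failures = [0.8, 1.1, 1.4, 1.9, 2.3, 2.8, 3.0, 3.6, 4.1, 4.4]
print(fit_hybrid_censored(failures, t_stop=4.5, n=30))
```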

    Systematic Routine for Setting Confidence Levels for Mean Time to Failure (MTTF)

    There are circumstances in which an item is intentionally tested to destruction. The purpose of this technique is to determine the failure rate (λ) of the tested item. For these items, the quality attribute is defined as how long the item will last until failure. Once the failure rate is determined from the number of failures and the total time on test of all items, the mean time to failure (MTTF), a typical statistic in survival data analysis, is calculated as MTTF = 1/λ. From this one obtains the reliability function R(t) = e^(-λt), where t is time, which allows the cumulative distribution function F(t) = 1 - e^(-λt) to be determined. The corresponding density function, f(t) = λe^(-λt), is a negative exponential with standard deviation σ = 1/λ, equal to the mean, which makes setting a warranty policy for the tested item difficult for the practitioner. An important property of the exponential distribution is that it is memoryless: its conditional probability satisfies P(T > s + t | T > s) = P(T > t) for all s, t ≥ 0. The exponential distribution can also be used to describe the interval lengths between any two consecutive arrival times in a homogeneous Poisson process. The purpose of this research paper is to present a simple technique for determining a realistic confidence level; using the same technique, the warranty level for the tested item can be predicted.
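    The exponential quantities above lend themselves to a small worked example. The sketch below is a standard textbook approach, not necessarily the paper's exact routine: it estimates MTTF = 1/λ from accumulated test time and uses the chi-square relationship 2T/MTTF ~ χ²(2r) for a failure-terminated test to set a two-sided confidence interval. The data values are illustrative.

```python
# Standard chi-square confidence interval for exponential MTTF (failure-terminated test).
import math
from scipy.stats import chi2

def mttf_confidence_interval(total_time, failures, confidence=0.90):
    """Point estimate and two-sided CI for MTTF from accumulated test time and failure count."""
    r = failures
    alpha = 1.0 - confidence
    mttf_hat = total_time / r                               # 1 / lambda_hat
    lower = 2 * total_time / chi2.ppf(1 - alpha / 2, 2 * r)
    upper = 2 * total_time / chi2.ppf(alpha / 2, 2 * r)
    return mttf_hat, lower, upper

def reliability(t, mttf):
    return math.exp(-t / mttf)                              # R(t) = e^(-lambda t), lambda = 1/MTTF

# Example: 10 failures over 5,000 unit-hours of accumulated test time.
mttf_hat, lo, hi = mttf_confidence_interval(total_time=5000.0, failures=10, confidence=0.90)
print(mttf_hat, lo, hi)         # the lower limit can be used to set a conservative warranty period
```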

    Managing multi-tiered suppliers in the high-tech industry

    Thesis (M. Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2009. Includes bibliographical references (leaves 131-135). This thesis presents a roadmap for companies to follow as they manage multi-tiered suppliers in the high-tech industry. Our research covered a host of sources, including interviews and publications from various companies, consulting companies, software companies, the computer industry, trade associations, and analyst firms, among others. While our review found that many companies begin supplier relationship management after sourcing events, we show that managing suppliers should start as companies form their competitive strategy. Our five-step roadmap provides a deliberate approach for companies as they build the foundation for effective and successful multi-tiered supplier relationship management. By Charles E. Frantz and Jimin Lee. M.Eng. in Logistics.

    Ambient Positional Instability Among Teachers in Minnesota Public Schools: 2010-2011 to 2014-2015

    This work is a preliminary investigation as part of a larger project on Ambient Positional Instability (API) among teachers in public schools in the United States, sponsored by the National Science Foundation and undertaken by the University of Pennsylvania. API tracks the number of teachers who change the school, grade, and subject(s) they teach, as well as those who leave the profession. In this paper, API is analyzed through teacher retention and churn. Retention is defined as the proportion of teachers who remain in the system each year over the period covered by the analysis, whereas churn is the ratio of newcomers and leavers to the total number of teachers in the system in the previous year; detailed formulae are provided later in this paper. The purpose of this paper is to examine teacher retention and churn in the state of Minnesota from 2010 to 2015. Specifically, this paper examines 1) the retention of full-time public school teachers at the state, district, and school levels, 2) teacher cohort retention trends across subjects and grade levels, and 3) teacher retention in the five largest districts of Minnesota. To analyze these issues, publicly accessible administrative data on education staff in Minnesota from 2010 to 2015 were used. This paper proceeds as follows. First, the rationale of the API project is described, some of the reasons for teacher retention and churn are explored, and the consequences of high teacher churn and turnover are explained. Next, the data structure, our considerations in deciding how to reconfigure the data, and the process of data reconfiguration are described in detail; here, we also explain the challenges we faced while working with the data files. Thereafter, the findings regarding the three issues mentioned above are summarized. Finally, the conclusions drawn from the analysis and our next steps are presented.
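    Using the definitions of retention and churn quoted above, a minimal sketch of the year-over-year calculation might look as follows; the roster format (sets of teacher IDs per year) and the toy data are assumptions, not the project's actual data structure.

```python
# Year-over-year retention and churn from two consecutive staff rosters.
def retention_and_churn(prev_roster, curr_roster):
    """prev_roster, curr_roster: sets of teacher IDs employed in consecutive years."""
    prev, curr = set(prev_roster), set(curr_roster)
    stayers   = prev & curr
    leavers   = prev - curr
    newcomers = curr - prev
    retention = len(stayers) / len(prev)                     # share of last year's teachers who remain
    churn     = (len(newcomers) + len(leavers)) / len(prev)  # movement relative to last year's headcount
    return retention, churn

# Toy rosters for two consecutive school years.
r_prev = {"T01", "T02", "T03", "T04", "T05"}
r_curr = {"T02", "T03", "T04", "T06"}
print(retention_and_churn(r_prev, r_curr))   # (0.6, 0.6): 3 of 5 stayed; 2 left and 1 arrived
```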

    Estimation of the Available Rooftop Area for Installing the Rooftop Solar Photovoltaic (PV) System by Analyzing the Building Shadow Using Hillshade Analysis

    For continuous promotion of the solar PV system in buildings, it is crucial to analyze the rooftop solar PV potential. However, the rooftop solar PV potential in urban areas varies widely with the available rooftop area, which depends on building shadows. In order to estimate the available rooftop area accurately by considering the building shadow, this study proposed an estimation method of the available rooftop area for installing the rooftop solar PV system by analyzing the building shadow using Hillshade Analysis. A case study of the Gangnam district in Seoul, South Korea was presented by applying the proposed estimation method.
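    For readers unfamiliar with Hillshade Analysis, the following sketch applies the standard analytical hillshade formula to a digital surface model and filters rooftop cells by illumination. It illustrates the general idea only, not the study's GIS workflow; the basic formula here uses local slope and aspect and does not ray-trace shadows cast by neighbouring buildings, which dedicated GIS shadow options additionally model. The sun position, threshold, and DSM are illustrative.

```python
# Standard hillshade on a DSM grid, then count rooftop cells above a shading threshold.
import numpy as np

def hillshade(dsm, cellsize, sun_azimuth_deg=180.0, sun_altitude_deg=45.0):
    zenith = np.radians(90.0 - sun_altitude_deg)
    azimuth = np.radians((360.0 - sun_azimuth_deg + 90.0) % 360.0)   # compass -> math convention
    dz_dy, dz_dx = np.gradient(dsm, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shade = (np.cos(zenith) * np.cos(slope)
             + np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))
    return np.clip(255.0 * shade, 0.0, 255.0)

def available_rooftop_area(dsm, roof_mask, cellsize, threshold=1.0, **sun):
    """Area (m^2) of rooftop cells whose hillshade value exceeds the shading threshold."""
    hs = hillshade(dsm, cellsize, **sun)
    return float(((hs > threshold) & roof_mask).sum()) * cellsize ** 2

# Illustrative DSM: a roof plane rising 0.5 m per metre toward the east, 1 m cells.
dsm = np.tile(4.0 + 0.5 * np.arange(20.0), (20, 1))
roof = np.ones_like(dsm, dtype=bool)
print(available_rooftop_area(dsm, roof, cellsize=1.0, sun_azimuth_deg=270, sun_altitude_deg=20))
```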

    Systematic Pedagogy to Line Balancing with EXCEL

    Over the past ten years, simple and inexpensive operations research software that is user friendly to the mentor, student, and instructor has become difficult to obtain. This is especially true since Emmons, Flowers, Khot, and Mathur's STORM 4.0 for Windows is obsolete on current 32- and 64-bit operating systems and is no longer in print. After a diligent product and literature search, it appears that no adequate, inexpensive software is readily available. Assembly line balancing algorithms are heuristic methods used for balancing operations or production lines; however, most methods employ complex calculations that are challenging to the mentor and mentee. This paper presents a pedagogy based on a systems approach using Microsoft EXCEL. The objective is to prepare a spreadsheet file with four separate worksheets that are linked to the first worksheet. The step-by-step systematic approach allows data such as annual demand, annual time available, and process times to be entered on the main worksheet. When the user changes these data entry points, the efficiencies of each operating or production line are automatically re-computed for all three shifts. The worksheets use one of several available heuristics to compute cycle times (the required time between process activities) and transfer the results to one, two, or three shifts (worksheets two, three, or four). Once the spreadsheet and accompanying worksheets were completed, the results were compared to several different heuristic algorithms. When the authors were satisfied that the results were accurate and not significantly different from the other algorithms examined, the final step was to develop a working pedagogy to efficiently describe the process. This gives the user an efficient analytical tool to illustrate and explain interactions within a given process. A local manufacturing facility used this method as part of a monthly effort to increase line efficiency at individual workstations, and the project's results were satisfactorily tested in a production operations class. The major advantage to the practitioner, engineer, instructor, and student is that EXCEL is readily available on all personal computers, easily understood, and very practical. Students with very little exposure to line balancing were able to master the method within the first hour of exposure.
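    A minimal sketch of the arithmetic the worksheets automate, under illustrative task times, precedence relations, and demand figures: the cycle time follows from demand and available time, tasks are packed into stations with a simple longest-task-time rule (one of several possible heuristics), and line efficiency is recomputed from the result. This is not a transcription of the EXCEL workbook.

```python
# Cycle time from demand, a longest-task-time station assignment, and line efficiency.
def cycle_time(available_time_per_shift, shifts, annual_demand, working_days=250):
    # seconds available per year divided by annual demand = required seconds per unit
    return (available_time_per_shift * shifts * working_days) / annual_demand

def balance(tasks, precedence, ct):
    """tasks: {name: time}; precedence: {name: set of predecessor names}; ct: cycle time."""
    assert all(time <= ct for time in tasks.values()), "a task exceeds the cycle time"
    remaining, stations = dict(tasks), []
    while remaining:
        station, load = [], 0.0
        while True:
            # candidates: all predecessors already assigned and the task still fits this station
            ready = [t for t in remaining
                     if precedence.get(t, set()).isdisjoint(remaining)
                     and load + remaining[t] <= ct]
            if not ready:
                break
            pick = max(ready, key=lambda t: remaining[t])      # longest-task-time rule
            station.append(pick)
            load += remaining.pop(pick)
        stations.append(station)
    efficiency = sum(tasks.values()) / (len(stations) * ct)
    return stations, efficiency

tasks = {"A": 40, "B": 30, "C": 50, "D": 40, "E": 6, "F": 25}                    # seconds per unit
prec  = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}, "E": {"D"}, "F": {"E"}}
ct = cycle_time(available_time_per_shift=27000, shifts=1, annual_demand=75000)   # = 90 s/unit
stations, efficiency = balance(tasks, prec, ct)
print(ct, stations, round(efficiency, 3))
```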

    UniPrimer: A Web-Based Primer Design Tool for Comparative Analyses of Primate Genomes

    Whole genome sequences of various primates have been released thanks to advances in DNA-sequencing technology. A combination of computational data mining and the polymerase chain reaction (PCR) assay to validate the data is an excellent approach to comparative genomics, so designing primers for PCR is an essential step in a comparative analysis of primate genomes. Here, we developed and introduced UniPrimer for use in such studies. UniPrimer is a web-based tool that designs PCR and DNA-sequencing primers. It compares sequences from six different primates (human, chimpanzee, gorilla, orangutan, gibbon, and rhesus macaque) and designs primers on regions conserved across species. UniPrimer is linked to the RepeatMasker, Primer3Plus, and OligoCalc software to produce primers with high accuracy, and to UCSC In-Silico PCR to confirm whether the designed primers work. To test the performance of UniPrimer, we designed primers on sample sequences using UniPrimer and manually designed primers for the same sequences. Comparing the two processes showed that UniPrimer was more effective than manual work in terms of saving time and reducing errors.
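    As a rough illustration of the conserved-region idea (not UniPrimer's actual pipeline, which relies on RepeatMasker, Primer3Plus, OligoCalc, and UCSC In-Silico PCR), the sketch below scans a toy multiple alignment for windows that are identical across species, the kind of sites where cross-species primers would be placed; the sequences and window length are invented.

```python
# Find perfectly conserved, gap-free windows in an aligned set of orthologous sequences.
def conserved_windows(aligned_seqs, min_len=20):
    """aligned_seqs: equal-length, gapped sequences; yields (start, end, sequence) of conserved runs."""
    length = len(aligned_seqs[0])
    start = None
    for i in range(length + 1):
        column_ok = (i < length
                     and len({s[i] for s in aligned_seqs}) == 1
                     and aligned_seqs[0][i] != "-")
        if column_ok and start is None:
            start = i
        elif not column_ok and start is not None:
            if i - start >= min_len:
                yield start, i, aligned_seqs[0][start:i]
            start = None

# Toy alignment of three species (invented sequences, equal length, '-' marks a gap).
human  = "ATGGCGTTAACCGGATTCAGGT-ACCGTTAGGCTA"
chimp  = "ATGGCGTTAACCGGATTCAGGTAACCGTTAGGCTA"
rhesus = "ATGGCGTTAACCGGATTCAGGT-ACCGTTAGGCTT"
for start, end, seq in conserved_windows([human, chimp, rhesus], min_len=15):
    print(start, end, seq)      # candidate primer-placement region shared by all three species
```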

    Service Quality in a Reduced Payroll Environment: Applying Queuing Analysis to Customer Perception Case Study

    This study was conducted in a national retail pharmacy company's stores in Western North Carolina to examine the impact of reduced store staffing, primarily pharmacists and service staff, on customers' satisfaction with service time. Customer arrival rates and service times for each queue were measured to determine optimal staffing. A random customer survey in multiple store locations provided customers' perceptions of service quality. Analysis determined that over 30% of the customers surveyed were dissatisfied with service time. A regression analysis demonstrated a significant linear relationship (α = 0.05) between total service time and customer satisfaction. Study results indicate that cutting staff could result in an unacceptable loss of competitive advantage. Payroll cost savings of less than $70,000 per year could result in lost revenue in excess of $1,700,000 per year. Thus, reducing staff hours (decreasing payroll) in the short term may negatively impact long-term effectiveness and productivity.
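    The queuing side of such a study is commonly handled with M/M/c formulas; the sketch below computes the Erlang C probability of waiting and the mean queue wait for alternative staffing levels. The arrival and service rates are illustrative assumptions, not the stores' measured values, and this is not the study's own analysis.

```python
# Erlang C (M/M/c) waiting-time metrics for comparing staffing levels.
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability an arriving customer must wait, and mean wait in queue (same time units)."""
    a = arrival_rate / service_rate                 # offered load
    rho = a / servers
    if rho >= 1.0:
        raise ValueError("unstable queue: need more servers")
    top = a ** servers / factorial(servers)
    bottom = (1 - rho) * sum(a ** k / factorial(k) for k in range(servers)) + top
    p_wait = top / bottom
    wq = p_wait / (servers * service_rate - arrival_rate)
    return p_wait, wq

# Example: 40 customers/hour, each server handles 15/hour; compare 3 vs 4 staff on duty.
for c in (3, 4):
    p_wait, wq = erlang_c(arrival_rate=40, service_rate=15, servers=c)
    print(c, "servers:", round(p_wait, 3), "P(wait),", round(60 * wq, 1), "min average wait")
```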

    F^2-Softmax: Diversifying Neural Text Generation via Frequency Factorized Softmax

    Despite recent advances in neural text generation, encoding the rich diversity of human language remains elusive. We argue that sub-optimal text generation is mainly attributable to the imbalanced token distribution, which particularly misdirects the learning model when trained with the maximum-likelihood objective. As a simple yet effective remedy, we propose two novel methods, F^2-Softmax and MefMax, for balanced training even with a skewed frequency distribution. MefMax assigns tokens uniquely to frequency classes, trying to group tokens with similar frequencies and to equalize the frequency mass between classes. F^2-Softmax then decomposes the probability distribution of the target token into a product of two conditional probabilities: (i) the frequency class, and (ii) the token within the target frequency class. Models learn more uniform probability distributions because each softmax is confined to a subset of the vocabulary. Significant performance gains on seven relevant metrics suggest the supremacy of our approach in improving not only the diversity but also the quality of generated texts. Comment: EMNLP 2020.
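    The factorization the abstract describes can be sketched in a few lines. The code below is a toy version under stated assumptions, not the authors' implementation: tokens are greedily grouped into classes of roughly equal total frequency mass (the MefMax idea), and the target-token probability is the product of a class softmax and a within-class softmax.

```python
# Toy frequency-factorized softmax: p(token) = p(class) * p(token | class).
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def equal_mass_classes(token_freqs, n_classes):
    """Greedily cut tokens (sorted by frequency) into classes of roughly equal frequency mass."""
    order = np.argsort(token_freqs)[::-1]
    target = token_freqs.sum() / n_classes
    classes, mass, cls = np.zeros(len(token_freqs), dtype=int), 0.0, 0
    for tok in order:
        classes[tok] = cls
        mass += token_freqs[tok]
        if mass >= target and cls < n_classes - 1:
            cls, mass = cls + 1, 0.0
    return classes

def factorized_prob(class_logits, token_logits, classes, token):
    """Each softmax ranges over a smaller set: all classes, then only the tokens in one class."""
    cls = classes[token]
    p_class = softmax(class_logits)[cls]
    members = np.flatnonzero(classes == cls)
    p_token_given_class = softmax(token_logits[members])[np.where(members == token)[0][0]]
    return p_class * p_token_given_class

# 8-token vocabulary with a skewed frequency distribution, split into 3 frequency classes.
freqs = np.array([500, 200, 120, 60, 30, 20, 10, 5], dtype=float)
classes = equal_mass_classes(freqs, n_classes=3)
rng = np.random.default_rng(0)
print(classes, factorized_prob(rng.normal(size=3), rng.normal(size=8), classes, token=4))
```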

    A Methodology for Appropriate Testing When Data is Heterogeneous Using EXCEL

    A Methodology for Appropriate Testing When Data is Heterogeneous was originally published and copyrighted in the mid-1990s in Turbo Pascal for a 16-bit operating system. While working on an ergonomic dissertation (Yearout, 1987), the author determined that the perceptual lighting preference data were heterogeneous and not normal. Drs. Milliken and Johnson, the authors of Analysis of Messy Data Volume I: Designed Experiments (1989), advised that Satterthwaite's Approximation with Bonferroni's Adjustment to correct for pairwise error be used to analyze the heterogeneous data. This technique of applying linear combinations with adjusted degrees of freedom allowed the use of t-table criteria to make group comparisons without resorting to standard nonparametric techniques. Thus, data with unequal variances and unequal sample sizes could be analyzed without losing valuable information. Variances raised to the 4th power were so large that they could not be re-entered into basic calculators. The solution was to develop an original software package, written in Turbo Pascal on a 7 ¼ inch disk for a 16-bit operating system. Current 32- and 64-bit operating systems and more efficient programming languages have made that software obsolete and unusable; running the old system could return many incorrect results or terminate unexpectedly. The purpose of this research was to develop a spreadsheet algorithm with multiple interactive EXCEL worksheets that efficiently applies Satterthwaite's Approximation with Bonferroni's Adjustment to solve the messy data problem. To ensure that the pedagogy is accurate, the resulting package was successfully tested in the classroom with academically diverse students. A comparison between this technique and EXCEL's Add-Ins Analysis ToolPak t-test Two-Sample Assuming Unequal Variances was conducted using several different data sets; the EXCEL Add-In returned incorrect significant differences. Engineers, ergonomists, psychologists, and social scientists will find the developed program very useful. A major benefit is that spreadsheets will remain current regardless of evolving operating systems' status.
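    For readers who want the statistics without the spreadsheet, here is a minimal sketch of Welch-Satterthwaite approximate degrees of freedom with a Bonferroni-adjusted significance level for all pairwise comparisons; the group data are illustrative, and the code is neither the authors' EXCEL workbook nor the original Turbo Pascal package.

```python
# Pairwise Welch (Satterthwaite) t-tests with a Bonferroni-adjusted significance level.
from itertools import combinations
from statistics import mean, variance
from scipy.stats import t as t_dist

def satterthwaite_test(x, y):
    """Welch t statistic, approximate df, and two-sided p-value for two independent samples."""
    n1, n2 = len(x), len(y)
    v1, v2 = variance(x) / n1, variance(y) / n2
    t_stat = (mean(x) - mean(y)) / (v1 + v2) ** 0.5
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))   # Welch-Satterthwaite df
    p = 2 * t_dist.sf(abs(t_stat), df)
    return t_stat, df, p

def bonferroni_pairwise(groups, alpha=0.05):
    """All pairwise Welch tests; each is judged at alpha / (number of comparisons)."""
    pairs = list(combinations(groups, 2))
    cutoff = alpha / len(pairs)
    results = {}
    for a, b in pairs:
        t_stat, df, p = satterthwaite_test(groups[a], groups[b])
        results[(a, b)] = (round(t_stat, 3), round(df, 1), round(p, 4), p < cutoff)
    return results

# Illustrative groups with unequal variances and unequal sample sizes.
groups = {"low":    [42.1, 44.3, 40.8, 43.5, 41.9],
          "medium": [47.2, 49.9, 46.4, 51.3, 48.8, 50.1, 47.5],
          "high":   [55.6, 60.2, 52.9, 58.4]}
for pair, res in bonferroni_pairwise(groups).items():
    print(pair, res)    # (t, df, p, significant at Bonferroni-adjusted level)
```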